
    New techniques for graph algorithms

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 181-192).
    The growing need to deal efficiently with massive computing tasks prompts us to consider the following question: How well can we solve fundamental optimization problems if our algorithms have to run really quickly? The motivation for the research presented in this thesis stems from addressing the above question in the context of algorithmic graph theory. To pursue this direction, we develop a toolkit that combines a diverse set of modern algorithmic techniques, including sparsification, low-stretch spanning trees, the multiplicative-weights-update method, dynamic graph algorithms, fast Laplacian system solvers, and tools of spectral graph theory. Using this toolkit, we obtain improved algorithms for several basic graph problems, including:
    -- The Maximum s-t Flow and Minimum s-t Cut Problems. We develop a new approach to computing a (1 - ε)-approximately maximum s-t flow and a (1 + ε)-approximately minimum s-t cut in undirected graphs that gives the fastest known algorithms for these tasks. These algorithms are the first to improve the long-standing bound of O(n^{3/2}) running time on sparse graphs.
    -- Multicommodity Flow Problems. We set forth a new method of speeding up the existing approximation algorithms for multicommodity flow problems, and use it to obtain the fastest known (1 - ε)-approximation algorithms for these problems. These results improve upon the best previously known bounds by a factor of roughly Ω(m/n), and make the resulting running times essentially match the Ω(mn) "flow-decomposition barrier" that is a natural obstacle to all the existing approaches.
    -- Undirected (Multi-)Cut-Based Minimization Problems. We develop a general framework for designing fast approximation algorithms for (multi-)cut-based minimization problems in undirected graphs. Applying this framework leads to the first algorithms for several fundamental graph partitioning primitives, such as the (generalized) sparsest cut problem and the balanced separator problem, that run in close to linear time while still providing polylogarithmic approximation guarantees.
    -- The Asymmetric Traveling Salesman Problem. We design an O(log n / log log n)-approximation algorithm for a classical problem of combinatorial optimization: the asymmetric traveling salesman problem. This is the first asymptotic improvement over the long-standing approximation barrier of Θ(log n) for this problem.
    -- Random Spanning Tree Generation. We improve the bound on the time needed to generate a uniform random spanning tree of an undirected graph.
    by Aleksander Mądry. Ph.D.
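
    The multiplicative-weights-update method is one of the toolkit components named above. Purely as a point of reference, the Python sketch below shows its generic "Hedge" form; the function name, the learning rate eta, and the toy losses are illustrative assumptions, not the thesis' flow-specific machinery.

        # A minimal sketch (assumed, not from the thesis) of the multiplicative-weights-update
        # rule in its generic "Hedge" form, with per-round losses in [0, 1].
        import numpy as np

        def hedge(losses, eta=0.5):
            """losses: T x n array, losses[t][i] = loss of option i in round t."""
            T, n = losses.shape
            w = np.ones(n)                     # start with uniform weights
            total_loss = 0.0
            for t in range(T):
                p = w / w.sum()                # play the normalized weights
                total_loss += p @ losses[t]    # expected loss incurred this round
                w *= np.exp(-eta * losses[t])  # multiplicative update: penalize costly options
            return total_loss, w / w.sum()

        # Toy usage: two options, the second consistently better; weight mass shifts to it.
        rng = np.random.default_rng(0)
        L = np.column_stack([rng.uniform(0.4, 1.0, 100), rng.uniform(0.0, 0.3, 100)])
        print(hedge(L))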

    Round Compression for Parallel Matching Algorithms

    For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context, though, is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem, one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem, though, to get stuck in a fundamental way at roughly O(log n) rounds once we enter the near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that perplexing possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+ε)-approximation to maximum matching, for any fixed constant ε > 0, in O((log log n)^2) rounds.
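
    The PRAM baseline mentioned above rests on the folklore fact that any maximal matching is a 2-approximation to a maximum matching. The sketch below is a minimal sequential greedy construction of a maximal matching, given only to fix that notion; the edge-list format and function name are illustrative assumptions, and the paper's MPC algorithm compresses rounds in a far more involved way.

        # Greedy maximal matching: scan edges, keep any edge whose endpoints are both free.
        # Any maximal matching obtained this way has at least half the maximum matching size.
        def greedy_maximal_matching(edges):
            """edges: iterable of (u, v) pairs; returns a maximal matching as a set of edges."""
            matched = set()
            matching = set()
            for u, v in edges:
                if u not in matched and v not in matched:
                    matching.add((u, v))
                    matched.update((u, v))
            return matching

        # Toy usage on the path a-b-c-d: the result {(a, b), (c, d)} is in fact maximum here.
        print(greedy_maximal_matching([("a", "b"), ("b", "c"), ("c", "d")]))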

    Adversarially Robust Generalization Requires More Data

    Machine learning models are often susceptible to adversarial perturbations of their inputs. Even small perturbations can cause state-of-the-art classifiers with high "standard" accuracy to produce an incorrect prediction with high confidence. To better understand this phenomenon, we study adversarially robust learning from the viewpoint of generalization. We show that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning. This gap is information-theoretic and holds irrespective of the training algorithm or the model family. We complement our theoretical results with experiments on popular image classification datasets and show that a similar gap exists here as well. We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.
    Comment: Small changes for biblatex compatibility.
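
    As a toy illustration of the opening observation only (not the paper's experiments or its theoretical data model), the numpy sketch below shows how an L-infinity perturbation that is small in every coordinate can flip a linear classifier in high dimensions; the classifier, the input, and all constants are assumptions made up for the example.

        import numpy as np

        # For a linear classifier sign(w . x), shifting every coordinate of x by eps in the
        # direction -sign(w) lowers the score by eps * ||w||_1, which grows with the dimension,
        # so even a tiny per-coordinate change can flip the prediction.
        rng = np.random.default_rng(0)
        d = 1000
        w = np.sign(rng.normal(size=d)) / np.sqrt(d)   # unit-norm linear classifier
        x = 3 * w + rng.normal(size=d)                 # clean input, classified +1 w.h.p.

        score = w @ x                                  # roughly 3
        eps = 1.1 * score / np.abs(w).sum()            # per-coordinate budget, about 0.1 here
        x_adv = x - eps * np.sign(w)                   # worst-case L_inf perturbation

        print("clean score:      ", score)             # positive: correct prediction
        print("adversarial score:", w @ x_adv)         # negative: prediction flipped
        print("eps per coordinate:", eps)              # small next to coordinates of size ~1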

    Trends for beta-blockers use in a large cohort of Polish hypertensive patients — Pol-Fokus Study

    Background. Beta-blockers remain one of the most frequently prescribed antihypertensive drug classes. The aim of the analysis was to evaluate the characteristics of patients treated with beta-blockers and the factors associated with beta-blocker treatment.
    Material and methods. We analysed data from a large cross-sectional study evaluating 12,375 patients treated for hypertension for at least one year.
    Results. Overall, 7080 patients (57.2% of the whole group) were treated with beta-blockers. The rate of beta-blocker use was higher in patients with diabetes (62.9 vs 55.6%), coronary artery disease (72.2 vs 46.4%), previous myocardial infarction (82.3 vs 54.1%), heart failure (73.1 vs 53.3%) and arrhythmias (73.1 vs 51.1%) than in patients without those conditions (all comparisons p < 0.001). Beta-blockers were used less frequently among patients with asthma/COPD than among those without asthma/COPD (54.0 vs 58.0%; p = 0.017). In patients aged 40 years or less, compelling indications for these agents were found in only 21.7% of patients. In patients aged 40–65 years, none of the compelling indications was present in 41.3% of patients. In patients aged 65 years or more, the most frequent compelling indications were coronary artery disease, previous myocardial infarction and heart failure, which were present in 70.1% of patients.
    Conclusions. A high utilization rate of beta-blockers in patients with hypertension, second only to renin-angiotensin system blockers, has been shown. In middle-aged and, especially, older patients it might reflect the high cardiovascular burden of those patients, including the coexistence of established cardiac disease. In younger patients, beta-blockers are more often used with none of the compelling indications present.

    Gradient Descent: The Mother of All Algorithms?

    Presented as part of the Workshop on Algorithms and Randomness on May 14, 2018 at 11:30 a.m. in the Klaus Advanced Computing Building, Room 1116. Aleksander Mądry is an NBX Career Development Associate Professor of Computer Science in the MIT EECS Department. His research aims to identify and tackle key algorithmic challenges in today's computing. His goal is to develop theoretical ideas and tools that, ultimately, will change the way we approach optimization -- in all shapes and forms, both in theory and in practice. Runtime: 56:28 minutes.
    More than half a century of research in theoretical computer science has brought us a great wealth of advanced algorithmic techniques. These techniques can then be combined in a variety of ways to provide us with sophisticated, often beautifully elegant algorithms. This diversity of methods is truly stimulating and intellectually satisfying. But is it also necessary? In this talk, I will address this question by discussing one of the most, if not the most, fundamental continuous optimization techniques: the gradient descent method. I will briefly describe how this method can be applied, sometimes in a quite non-obvious manner, to a number of classic algorithmic tasks, such as the maximum flow problem, the bipartite matching problem, the k-server problem, as well as matrix scaling and balancing. The resulting perspective will provide us with a broad, unifying view of this diverse set of problems -- a perspective that was key to making the first progress in decades on each one of these tasks.
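
    For reference only, here is a minimal sketch of the plain gradient descent iteration the talk revolves around, applied to a least-squares toy objective; the step size, the objective, and all names are illustrative, and the talk's applications rely on far more structured objectives and update rules.

        import numpy as np

        def gradient_descent(grad, x0, step=0.1, iters=500):
            """Iterate x_{t+1} = x_t - step * grad(x_t)."""
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                x = x - step * grad(x)
            return x

        # Toy usage: minimize f(x) = ||A x - b||^2 via its gradient 2 A^T (A x - b).
        A = np.array([[2.0, 0.0], [0.0, 1.0]])
        b = np.array([1.0, 3.0])
        x_star = gradient_descent(lambda x: 2 * A.T @ (A @ x - b), np.zeros(2))
        print(x_star)   # approaches the least-squares solution [0.5, 3.0]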

    Fast Approximation Algorithms for Cut-based Problems in Undirected Graphs

    We present a general method of designing fast approximation algorithms for cut-based minimization problems in undirected graphs. In particular, we develop a technique that, given any such problem that can be approximated quickly on trees, allows approximating it almost as quickly on general graphs while losing only a poly-logarithmic factor in the approximation guarantee. To illustrate the applicability of our paradigm, we focus our attention on the undirected sparsest cut problem with general demands and the balanced separator problem. By a simple use of our framework, we obtain poly-logarithmic approximation algorithms for these problems that run in time close to linear. The main tool behind our result is an efficient procedure that decomposes general graphs into simpler ones while approximately preserving the cut-flow structure. This decomposition is inspired by the cut-based graph decomposition of Räcke that was developed in the context of oblivious routing schemes, as well as by the construction of ultrasparsifiers due to Spielman and Teng that was employed for preconditioning symmetric diagonally dominant matrices.
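
    To make the "approximate it quickly on trees" ingredient concrete, the sketch below solves the uniform-demand sparsest cut problem on a tree by scoring the fundamental cut of every edge (on a tree, an optimal sparsest cut is induced by removing a single edge). The input format and function name are illustrative assumptions; the paper's contribution is the decomposition that reduces general graphs to such tree instances.

        from collections import defaultdict, deque

        def sparsest_cut_on_tree(n, edges):
            """n vertices 0..n-1; edges: list of (u, v, capacity) triples forming a tree."""
            adj = defaultdict(list)
            for u, v, c in edges:
                adj[u].append(v)
                adj[v].append(u)

            best_sparsity, best_edge = float("inf"), None
            for u, v, c in edges:
                # Size of the component containing u once the edge (u, v) is removed.
                seen, queue = {u}, deque([u])
                while queue:
                    x = queue.popleft()
                    for y in adj[x]:
                        if y not in seen and {x, y} != {u, v}:
                            seen.add(y)
                            queue.append(y)
                s = len(seen)
                sparsity = c / (s * (n - s))   # capacity over number of separated vertex pairs
                if sparsity < best_sparsity:
                    best_sparsity, best_edge = sparsity, (u, v)
            return best_sparsity, best_edge

        # Toy usage: a unit-capacity path 0-1-2-3; the middle edge gives the sparsest cut.
        print(sparsest_cut_on_tree(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]))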